perlis[s84,jmc] (to: Don Perlis) (re: Berofsky book)
"perlis%umcp-cs.csnet"@CSNET-RELAY,"dam%oz"@mc
Free will and determinism
Dear Don:
This letter should be considered as submitted to DAM's refereed
Logic in AI mailing list, should that list be implemented; this is why I
have put the references in proper style.
Thanks for recommending (Berofsky 1966), which took me a while to
read.
As you said, Hobart was more or less on our side. However, it
seems to me that AI has a lot to say about the question of what free will
is that the philosophers missed. There are several key areas where AI can
contribute, and needs to contribute for its own benefit.
1. AI is inclined to take the "design stance", as Daniel Dennett
calls it. Therefore, the issue for us is what attitude we should build
into a robot about its own free will and that of other robots and people.
Clearly a robot needs to apply the word "can" to itself. If it is to
choose satisfactorily among alternatives it must reason about what it can
and cannot do.
Notice that it is much more important for it to reason about the
future than about the past. It may sometimes be worthwhile for it to
reason about what it could have done, e.g. in order to learn from its
experience and do better next time, but this is less important for its
useful function than listing its present alternative feasible objectives.
2. The philosophers' emphasis on "could have" rather than "can" is
a consequence of their interest in whether praise or blame or reward or
punishment can appropriately be attached to actions. While these are
interesting problems, it is important to notice that the problem of "can"
can be attacked without touching them.
3. I am still of the opinion, expounded in my paper with Pat Hayes
(McCarthy and Hayes 1969), that what a robot (or automaton) can do is describable in terms of an
automaton system in which the robot's outputs are replaced by external
inputs to the system. However, that paper left itself open to carping by
not making clear enough that a wide variety of different automaton systems
are needed to explicate the variety of useful human usages of "can".
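To make this concrete, here is a rough sketch of that reading of "can":
detach the robot, let its outputs become free inputs to the rest of the
system, and ask whether some output sequence achieves the condition in
question. The names, the finite horizon and the toy counter are my own
illustration, not anything from the 1969 paper.

    def can_achieve(environment_step, initial_state, robot_outputs, goal, horizon=10):
        """The robot "can" bring about goal if, with its outputs treated as
        free external inputs to the rest of the system, some output sequence
        of length at most horizon drives that system into a state
        satisfying goal."""
        frontier = {initial_state}
        for _ in range(horizon):
            if any(goal(s) for s in frontier):
                return True
            frontier = {environment_step(s, o) for s in frontier for o in robot_outputs}
        return any(goal(s) for s in frontier)

    # Toy environment: a counter the robot may increment or leave alone.
    step = lambda n, o: n + 1 if o == "inc" else n
    print(can_achieve(step, 0, {"inc", "noop"}, lambda n: n == 3))   # True
    print(can_achieve(step, 0, {"inc", "noop"}, lambda n: n == 20))  # False within the horizon

Different automaton systems then correspond to different choices of
environment_step and robot_outputs, which is one way of getting at the
variety of usages of "can".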
4. Notice also that the question of moral judgment can be
discussed independently of reward or punishment. A child might say, "It's
a bad robot, it kicked my cat" without having any opinion on whether
robots are more appropriately punished or debugged.
5. While we're on children, I remember that my younger daughter,
at quite an early age, replied to a question about whether she could do
something with the statement, "I can, but I won't". We have a lot of work
to do before robots can behave in so sophisticated a way. Even a useful
use of "but" won't be trivial to achieve.
ADDITIONAL QUESTIONS
Here are some questions suggested by the papers.
1. A person often wishes that his wishes were different from what
they are. For example, a person may wish he weren't hungry or didn't want
a cigarette. Is it ever worthwhile to program robots this way? It would
seem that, unlike a human, a robot should be able to change its wishes
arbitrarily, so that its higher-level goal would automatically take
precedence.
2. What about unrealizable wishes? Is it ever appropriate for the
robot to wish that an airplane flight were shorter? It seems to me that
the answer is yes. In the first place, it may not be obvious whether a
wish is realizable. Unexpected opportunities may arise. Second, the
remembered experience of wishes, even unrealizable ones, affects future
behavior. Our robot may now put some value on avoiding long airplane
flights.
3. The simplest way to look at a robot or even a human is to
regard its motivation as unitary. It weighs various aspects of the
possible outcomes of its actions and decides on an overall preference.
For humans this model seems unrealistic. We avoid weighing different
motivations against one another as long as possible, because doing so is
difficult; we pursue different goals in parallel and suffer when they turn
out to be incompatible. It may be worthwhile to program robots in a
similar way: to pursue goals in parallel and decide among them only when
they turn out to be incompatible in the immediate future.
Indeed it may be too difficult to design an overall evaluation
function. The question of how to weigh providing its master with an ice
cream soda against avoiding the risk of exciting another person's allergy
to cats may never arise.
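A sketch of what I have in mind, with all the names my own invention:
each goal proposes an action for the current step, and goals are weighed
against one another only when two proposals conflict right now.

    def choose_action(goals, available_actions, prefer):
        """Each goal proposes an action for the current step (or None if it
        has nothing to say).  Goals are weighed against one another, via
        prefer, only when two proposals actually conflict right now."""
        proposals = [(g, g.propose(available_actions)) for g in goals]
        proposals = [(g, a) for g, a in proposals if a is not None]
        if not proposals:
            return None
        best_goal, best_action = proposals[0]
        for g, a in proposals[1:]:
            if a != best_action and prefer(g, best_goal) is g:  # conflict forces a comparison
                best_goal, best_action = g, a
        return best_action

The troublesome overall evaluation function is replaced by prefer, which
is consulted only for the pairs of goals that actually collide.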
4. Some problems in formalizing how reasoning causes action in
robots may turn out to be difficult. However, Lewis Carroll's infinite
regress concerning modus ponens (so much admired by Hofstadter) should
have been answered by Achilles as follows: "There are various
justifications of modus ponens that might be given, but the reason I use
it is that that's the way I'm built".
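In program terms the reply amounts to this: the rule of detachment is
part of the machinery that runs the robot, not another premise that
itself needs justifying. A toy illustration of my own:

    def forward_chain(facts, implications):
        """implications is a list of pairs (p, q), read "p implies q".
        Modus ponens is applied by this loop itself; the rule is code, not
        a premise, so no regress of further premises arises."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for p, q in implications:
                if p in facts and q not in facts:
                    facts.add(q)          # detachment, built in
                    changed = True
        return facts

    print(forward_chain({"A"}, [("A", "B"), ("B", "C")]))   # {'A', 'B', 'C'}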
5. Of all the papers, Davidson's was the most concrete from the AI
point of view. It might just be possible to program a Davidsonian robot.
Its action scanner would look for reasoned conclusions of the form "I
should do X", where X is a "primary action".
6. The key to avoiding philosophical problems is to refrain from
excessive generality. All common sense psychology consists of
"approximate theories" in the sense of my 1980 paper. This means that the
concepts fall apart if examined too closely.
References:
Berofsky, Bernard (ed.) (1966): "Free Will and Determinism", Harper and
Row.
McCarthy, John and P.J. Hayes (1969): "Some Philosophical Problems from
the Standpoint of Artificial Intelligence", in B. Meltzer and D. Michie
(eds.), Machine Intelligence 4, American Elsevier, New York.
McCarthy, John (1979): "Ascribing Mental Qualities to Machines", in
Martin Ringle (ed.), Philosophical Perspectives in Artificial
Intelligence, Harvester Press, July 1979.
Best Regards,
John McCarthy